Some upper bounds for the rate of convergence of penalized likelihood context tree estimators
Abstract
We find upper bounds on the probability of the underestimation and overestimation errors in penalized likelihood context tree estimation. The bounds are explicit and apply to processes of not necessarily finite memory. We allow for general penalizing terms and give conditions on the maximal depth of the estimated trees under which the estimates are strongly consistent. This generalizes previous results obtained for estimation of the order of a Markov chain.
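As a point of reference, and not quoted from the paper itself, the penalized maximum likelihood context tree estimator studied in this literature (e.g. Csiszár & Talata, 2006) can be sketched as

\[
\hat{\tau}_n \;=\; \operatorname*{arg\,max}_{\tau \in \mathcal{T}_{d(n)}} \Big\{ \log \hat{P}_{\tau}(X_1^n) \;-\; \operatorname{pen}(n)\,|\tau|\,(|A|-1) \Big\},
\]

where \(\mathcal{T}_{d(n)}\) is the set of candidate trees of depth at most \(d(n)\), \(\hat{P}_{\tau}(X_1^n)\) is the maximum likelihood of the sample under the tree \(\tau\), \(|\tau|(|A|-1)\) counts the free parameters over the alphabet \(A\), and \(\operatorname{pen}(n)\) is the general penalizing term mentioned in the abstract. Underestimation (respectively overestimation) then refers, loosely, to \(\hat{\tau}_n\) missing branches of (respectively adding branches to) the true context tree truncated at depth \(d(n)\).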
Similar papers
On the Rate of Convergence of Penalized Likelihood Context Tree Estimators
Abstract. We find an upper bound for the probability of error of penalized likelihood context tree estimators. The bound is explicit and applies to processes of unbounded memory, which constitute a subclass of infinite memory processes. We show that the maximal exponential decay of the probability of error is typically achieved with a penalizing term of the form n/log(n), where n is the sample ...
Penalized Likelihood-type Estimators for Generalized Nonparametric Regression
We consider the asymptotic analysis of penalized likelihood type estimators for generalized non-parametric regression problems in which the target parameter is a vector valued function defined in terms of the conditional distribution of a response given a set of covariates. A variety of examples including ones related to generalized linear models and robust smoothing are covered by the theory. ...
Generalized Nonparametric Regression via Penalized Likelihood
We consider the asymptotic analysis of penalized likelihood type estimators for generalized non-parametric regression problems in which the target parameter is a vector valued function defined in terms of the conditional distribution of a response given a set of covariates. A variety of examples including ones related to generalized linear models and robust smoothing are covered by the theory. ...
Rate of convergence of penalized likelihood context tree estimators
The Bayesian Information Criterion (BIC) was first proposed by Schwarz (1978) as a model selection technique. It was thought that BIC was not appropriate for context tree estimation because of the huge number of trees that have to be tested. Recently, Csiszár & Talata (2006) proved the almost sure consistency of the BIC estimator and also showed that it can be computed in lin...
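For concreteness, a hedged instance of the penalizing term above: in the context tree setting the BIC score of a candidate tree \(\tau\) is usually written as

\[
\operatorname{BIC}_{\tau}(X_1^n) \;=\; \log \hat{P}_{\tau}(X_1^n) \;-\; \frac{|\tau|\,(|A|-1)}{2}\,\log n,
\]

i.e. the general criterion with \(\operatorname{pen}(n) = \tfrac{1}{2}\log n\). The point of Csiszár & Talata (2006) is that the maximizing tree can be found by a recursive context tree maximizing procedure rather than by testing every candidate tree.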
Penalized Estimators in Cox Regression Model
The proportional hazards Cox regression models play a key role in analyzing censored survival data. We use penalized methods in high-dimensional scenarios to achieve more efficient models. This article reviews penalized Cox regression for some frequently used penalty functions. Analysis of the medical data set "mgus2" confirms that penalized Cox regression performs better than the Cox regressi...
Publication date: 2009